59 research outputs found

    Subsidence of "normal" seafloor: Observations do indicate "flattening"

    Seafloor topography is a key observational constraint upon the evolution of the oceanic lithosphere. Specifically, plots of oceanic depth (z) versus crustal age (t) for “normal” seafloor are well explained by the depth-age predictions of thermal contraction models such as the cooling half-space and cooling plate models. Old seafloor (t > ∼70 Ma) shallower than that predicted by half-space cooling (i.e., z ∝ √t), or “flattening,” is a key but debated discriminator between the two models. In a recent paper, Korenaga and Korenaga (2008) find normal seafloor depths of all ages to be consistent with a z ∝ √t model, thus supporting a cooling half-space model for all ages of seafloor. Upon reevaluation, however, the mean depths of their “normal” seafloor flatten at ages >70 Ma, e.g., by 723.2 ± 0.5 m (1 standard error) for t > 110 Ma. This observed inconsistency with the z ∝ √t model is statistically significant (>99.9%) and remains robust (>94%) even if the number of effective independent depth observations is argued to be low (e.g., n = 10). So, if any statistically significant conclusion can be drawn from the observed depths of rare old normal seafloor, it is that old seafloor flattens, which is incompatible with the cooling half-space model applying to all ages of seafloor but does not preclude a cooling-plate-style approximation to lithospheric evolution.
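The contrast between the two depth-age predictions can be sketched numerically. The constants below are the classic Parsons and Sclater (1977) textbook values (2500 m ridge depth, 350 m Myr⁻⁰·⁵ subsidence, 6400 m plate asymptote), assumed here purely for illustration; the paper's reanalysis uses its own calibration.

```python
import numpy as np

def half_space(t):
    """Half-space cooling depth (m): z ∝ sqrt(age), subsides indefinitely."""
    return 2500.0 + 350.0 * np.sqrt(t)

def plate_old(t):
    """Old-age plate-model depth (m): exponential approach to an asymptote."""
    return 6400.0 - 3200.0 * np.exp(-t / 62.8)

t = np.array([80.0, 110.0, 150.0])          # ages in Myr
flattening = half_space(t) - plate_old(t)   # how much shallower old seafloor is
print(np.round(flattening))                 # the discrepancy grows with age
```

With these illustrative constants the flattening is already a few hundred metres by 110 Ma, which is why old seafloor depth is such a sharp discriminator between the models.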

    Seamount detection and isolation with a modified wavelet transform

    The size, shape and number of seamounts, once detected and isolated from other features such as oceanic plateaus or trenches, have the potential to provide valuable constraints on important solid Earth processes, e.g. oceanic volcanism. The variability of seamount size and morphology, however, presents problems for computational approaches to seamount isolation. This paper develops a novel and efficient wavelet-based seamount detection routine, the ‘Spatial Wavelet Transform (SWT)’; this is the first use of multiple scales of analysis to directly isolate edifices from bathymetric data. Only weak shape-related criteria are used and no a priori knowledge of the scale and location of the seamounts is required. For a bathymetric profile collected on cruise v3312, the SWT matches, to within 25%, the dimensions of five times as many of the features identified by manual inspection as does the best statistically based (e.g. mean, median or mode) sliding-window filter. The size–frequency distribution, a key descriptor of seamount populations, is also much better estimated by the SWT method. As such, the SWT represents a step towards the goal of objective and robust quantification and classification of seamounts.
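The multi-scale idea behind wavelet detection can be sketched on a toy profile. This is illustrative only and is not the paper's SWT algorithm: a synthetic bathymetric profile (sloping regional seafloor plus one Gaussian seamount) is convolved with Ricker ("Mexican hat") wavelets at several trial scales, and the strongest normalised response marks the edifice location and a characteristic width.

```python
import numpy as np

def ricker(n, a):
    """Ricker wavelet of length n and scale a, normalised across scales."""
    t = np.arange(n) - n // 2
    w = (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)
    return w / a

x = np.arange(1000.0)                   # along-track distance (arbitrary units)
profile = -4000.0 + 0.2 * x             # gently sloping regional seafloor
profile += 1500.0 * np.exp(-0.5 * ((x - 600.0) / 30.0) ** 2)  # one seamount

best = (-np.inf, 0, 0)
for scale in (10, 20, 30, 50, 80):
    # "valid" mode avoids zero-padding edge artefacts; the symmetric,
    # zero-mean wavelet annihilates the linear regional trend.
    resp = np.convolve(profile, ricker(501, scale), mode="valid")
    i = int(np.argmax(resp))
    if resp[i] > best[0]:
        best = (resp[i], i + 250, scale)  # +250 re-centres the valid output

_, location, scale = best
print(location)  # peak response sits near the seamount summit at x = 600
```

A real routine must of course also isolate the edifice footprint and handle overlapping features, which is where the SWT's multi-scale analysis goes beyond this single-detection sketch.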

    Submarine geomorphology: quantitative methods illustrated with the Hawaiian volcanoes

    Submarine geomorphology, like sub-aerial geomorphology, is the study of the Earth's surface in order to better understand tectonic and geomorphic processes. Such processes include volcanism, neo-tectonics (i.e. the activity of geological faults), the escape of hydrocarbons and submarine erosion (e.g. by channel cutting or landslides). Furthermore, submarine geomorphology can provide valuable input into other fields, such as indicating likely fisheries or habitats for corals. This case study illustrates quantitative methods in submarine geomorphology with 'Regional-Residual Relief Separation', which splits landscapes (digital elevation models) into two components, isolating features of interest in one component for visualisation or analysis as desired: here, isolating Hawaiian volcanoes. Mapping volcanoes and accurately quantifying descriptive properties such as height and volume are vital to constrain our understanding of how the Earth melts and volcanoes erupt. Key future opportunities in submarine geomorphology using quantitative methods are also highlighted.
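A minimal sketch of regional-residual separation on a synthetic 1-D profile follows. A sliding median stands in here for the separation filter; all numbers (swell amplitude, edifice height and width, window size) are illustrative assumptions, and more careful separators exist precisely because window statistics can bite into broad edifices.

```python
import numpy as np

def regional_residual(profile, window):
    """Split a profile into a regional (sliding-median) and residual part."""
    half = window // 2
    padded = np.pad(profile, half, mode="edge")
    regional = np.array([np.median(padded[i:i + window])
                         for i in range(profile.size)])
    return regional, profile - regional

x = np.arange(500.0)
regional_true = -5000.0 + 100.0 * np.sin(x / 100.0)            # broad swell
volcano = 3000.0 * np.exp(-0.5 * ((x - 250.0) / 12.0) ** 2)    # the edifice
dem = regional_true + volcano

regional, residual = regional_residual(dem, window=201)
print(round(float(residual.max())))  # roughly recovers the 3000 m edifice
```

The residual component then holds the volcano for measurement (height, volume), while the regional component carries the broad swell.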

    Plate-like subsidence of the East Pacific Rise - South Pacific Superswell system

    In previous studies the removal of small-scale features such as seamounts and oceanic islands from bathymetry has revealed a large and unusually shallow region in the South Pacific Ocean, which, at 3000 km wide and up to 1 km high, has been dubbed a “superswell.” These studies use statistical techniques based on finding the modal depth of the bathymetry. Such an analysis, however, does not completely isolate these features, or their associated oceanic plateaus and localized hot spot swells, from the ridge-generated regional bathymetry upon which they are superimposed. Accordingly, a technique is required that passes beneath topographic constructs rather than through them, as is the tendency of the mean, median, or mode. We have developed an algorithm, MiMIC, that reproducibly removes all these features and reveals the large-scale bathymetric trends in a manner based upon and consistent with manual interpretation. Application of the algorithm to bathymetry data in the southwest Pacific shows that the depth anomaly with respect to a cooling plate model changes steadily from being too deep at the East Pacific Rise (EPR) crest to being too shallow at the superswell. The largest shallow anomaly of 712 ± 66 m occurs at 98 Ma, not 1300 m at 65 Ma, as has been previously suggested. Most significantly, the superswell appears to be part of a large-scale, “plate-like,” subsidence that extends to the EPR crest, rather than an isolated shallowing that reverses the subsidence and causes uplift. We interpret the plate-like subsidence as due in part to cooling of the oceanic lithosphere and in part to a lateral temperature gradient in the underlying asthenosphere, which is maintained by the flow of relatively hot material from beneath the superswell toward the relatively cold material beneath the EPR. The best-fit model implies a lateral temperature gradient of 0.014 °C km⁻¹ and is in general accord with the available effective elastic thickness, crustal thickness, heat flow, and seismic tomography data.

    Testing 3D landform quantification methods with synthetic drumlins in a real digital elevation model

    Metrics such as height and volume quantifying the 3D morphology of landforms are important observations that reflect and constrain Earth surface processes. Errors in such measurements are, however, poorly understood. A novel approach, using statistically valid ‘synthetic’ landscapes to quantify the errors, is presented. The utility of the approach is illustrated using a case study of 184 drumlins observed in Scotland, as quantified from a Digital Elevation Model (DEM) by the ‘cookie cutter’ extraction method. To create the synthetic DEMs, observed drumlins were removed from the measured DEM and replaced by elongate 3D Gaussian ones of equivalent dimensions, positioned randomly with respect to the ‘noise’ (e.g. trees) and regional trends (e.g. hills) that cause the errors. Then, errors in the cookie cutter extraction method were investigated by using it to quantify these ‘synthetic’ drumlins, whose location and size are known. Thus, the approach determines which key metrics are recovered accurately. For example, a mean height of 6.8 m is recovered poorly at 12.5 ± 0.6 (2σ) m, but mean volume is recovered correctly. Additionally, quantification methods can be compared: a variant on the cookie cutter using an un-tensioned spline induced about twice (× 1.79) as much error. Finally, a previously reported statistically significant (p = 0.007) difference in mean volume between sub-populations of different ages, which may reflect formational processes, is demonstrated to be only 30–50% likely to exist in reality. Critically, the synthetic DEMs are demonstrated to realistically model parameter recovery, primarily because they are still almost entirely the original landscape. Results are insensitive to the exact method used to create the synthetic DEMs, and the approach could be readily adapted to assess a variety of landforms (e.g. craters, dunes and volcanoes).
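The logic of the synthetic-landform test can be sketched end to end: plant a Gaussian "drumlin" of known height in a noisy, trending DEM, re-measure it with a simple cookie-cutter (fit and subtract a planar base from the footprint rim), and compare recovered to true height. Everything here (grid size, noise level, footprint definition) is an illustrative assumption, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

nx = ny = 200
x, y = np.meshgrid(np.arange(nx), np.arange(ny))
dem = 0.05 * x + rng.normal(0.0, 0.5, (ny, nx))   # regional slope + noise

true_h = 7.0
cx, cy, sx, sy = 100, 100, 15, 6                  # elongate Gaussian drumlin
dem += true_h * np.exp(-0.5 * (((x - cx) / sx) ** 2 + ((y - cy) / sy) ** 2))

# Cookie cutter: take the 3-sigma ellipse as the footprint, fit a plane
# to cells on its rim, subtract the plane, and read off the peak height.
r = ((x - cx) / (3 * sx)) ** 2 + ((y - cy) / (3 * sy)) ** 2
rim = (r > 0.9) & (r < 1.1)
A = np.column_stack([x[rim], y[rim], np.ones(rim.sum())])
coef, *_ = np.linalg.lstsq(A, dem[rim], rcond=None)
base = coef[0] * x + coef[1] * y + coef[2]
height = float((dem - base)[r <= 1.0].max())

print(round(height, 1))  # typically above the true 7.0 m: noise maxima
                         # ride on the summit, biasing height upward
```

Even this toy version reproduces the paper's qualitative finding that peak-based height is biased high by landscape noise while the true value is known exactly, which is the point of testing against synthetic landforms.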

    An integration to optimally constrain the thermal structure of oceanic lithosphere

    The evolution through time of the oceanic lithosphere is a substantial, incompletely resolved geodynamical problem. Consensus remains elusive regarding its thermal structure, physical properties, and the best model through which to unify observational constraints. We robustly reevaluate all three of these by (i) simultaneously fitting heat flow, bathymetry, and temperatures derived from a shear velocity model of the upper mantle, (ii) using the three main thermal models (half-space, plate, and Chablis), and (iii) analyzing five depth-age curves, wherein contrasting techniques were used to exclude anomalous features from seafloor depths. The thermal models are all updated to include a temperature-dependent heat capacity, a temperature- and pressure-dependent thermal conductivity, and an initial condition of adiabatic decompression including melting. The half-space model, which lets the lithosphere thicken indefinitely, cannot accurately fit the subsidence curves and requires mantle potential temperatures, Tm, that are too high. On the other hand, the models including a mechanism of basal heat supply are able to simultaneously explain all observations within two standard errors, with best-fitting parameters robust to the choice of the filtered bathymetry curve. For the plate model, which imposes a constant temperature at a fixed depth, Tm varies within 1380–1390°C, the equilibrium plate thickness a within 106–110 km, and the bulk thermal expansivity α within 2.95–3.20 × 10⁻⁵ K⁻¹. For the Chablis model, which prescribes a fixed heat flow at the base of a thickening lithosphere, the best-fitting values are Tm = 1320–1380°C, a = 176–268 km, and α = … K⁻¹. Driven by more accurate ocean depths, the plate model provides better joint fits to the observations; however, it requires values of α lower than experimentally measured, which can be explained by a reduction of the apparent expansivity due to the elastic rigidity of the upper lithosphere. The Chablis model better fits the data when α is set close to or above the experimental values. Although statistically consistent within two standard errors, a tendency toward incompatibility between observed depth-age curves and seismically derived temperatures is revealed with new clarity, because the latter do not exhibit a clear steady state whereas the former flatten; further work is needed to identify the origin of this apparent discrepancy. This work opens the way to investigations fully independent of particular solutions of the heat equation.
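For reference, the standard constant-property half-space model underlying such fits is simple to write down: T(z, t) = Tm · erf(z / (2√(κt))), with surface heat flow q(t) = k·Tm / √(πκt). The sketch below uses textbook parameter values as illustrative assumptions; the paper's variable-property, melt-corrected versions differ in detail.

```python
import math

Tm = 1350.0          # mantle potential temperature, °C (assumed)
kappa = 1e-6         # thermal diffusivity, m^2/s (assumed)
k = 3.0              # thermal conductivity, W/m/K (assumed)
myr = 3.15576e13     # seconds per Myr

def temperature(z_m, t_myr):
    """Half-space cooling temperature (°C) at depth z and age t."""
    return Tm * math.erf(z_m / (2.0 * math.sqrt(kappa * t_myr * myr)))

def heat_flow_mW(t_myr):
    """Surface heat flow (mW/m^2) predicted by half-space cooling."""
    return 1e3 * k * Tm / math.sqrt(math.pi * kappa * t_myr * myr)

print(round(heat_flow_mW(50.0)))  # ≈ 57–58 mW/m^2 at 50 Ma
```

Because this solution never reaches a steady state, its geotherm keeps cooling and its predicted depth keeps increasing as √t, which is exactly the behaviour the fitted subsidence curves penalise at old ages.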

    Examining the impact of visual teaching and learning strategies on undergraduate students self-reported experience of quantitative research methods teaching: update from the Loughborough Project


    Frontiers in geomorphometry and earth surface dynamics: possibilities, limitations and perspectives

    Geomorphometry, the science of quantitative land-surface analysis, has become a flourishing interdisciplinary subject, with applications in numerous fields. The interdisciplinarity of geomorphometry is its greatest strength and also one of its major challenges. Gaps are still present between the process-focussed fields (e.g. soil science, glaciology, volcanology) and the technical domain (such as computer science and statistics) where approaches and theories are developed. Thus, interesting geomorphometric applications struggle both to jump between process-specific disciplines and to take advantage of advances in computer science and technology. This special issue is therefore focused on facilitating cross-fertilization between disciplines, and on highlighting novel technical developments and innovative applications of geomorphometry to various Earth-surface processes. The issue collects a variety of contributions which fall into two main categories, Perspectives and Research, the latter further divided into “Research and innovative techniques” and “Research and innovative applications”. It showcases potentially exciting developments and tools which are the building blocks for the next step-change in the field.

    ‘A picture is worth ten thousand words’: a module to test the ‘visualization hypothesis’ in quantitative methods teaching

    Inadequate quantitative methods (QM) training provision for undergraduate social science students in the United Kingdom is a well-known problem. This paper reports on the design, implementation and assessment of an induction module created to test the hypothesis that visualization helps students learn key statistical concepts. The induction module is a twelve-week compulsory unit taught to first year social science students at a UK university, which they complete prior to a more traditional statistical, workshop-based QM module. A component of the induction module focuses on the use of visualization through Geographic Information Systems (GIS) to teach the process of hypothesis generation to students, while they are also introduced to the basics of QM research design and univariate and bivariate forms of data analysis. Self-reflexive evaluation indicates that visualization could assist students with more advanced QM statistical skills.

    Assessment of multiresolution segmentation for delimiting drumlins in digital elevation models

    Mapping or "delimiting" landforms is one of geomorphology's primary tools. Computer-based techniques such as land-surface segmentation allow the emulation of the process of manual landform delineation. Land-surface segmentation exhaustively subdivides a digital elevation model (DEM) into morphometrically-homogeneous irregularly-shaped regions, called terrain segments. Terrain segments can be created from various land-surface parameters (LSPs) at multiple scales, and may therefore potentially correspond to the spatial extents of landforms such as drumlins. However, this depends on the segmentation algorithm, the parameterization, and the LSPs. In the present study we assess the widely used multiresolution segmentation (MRS) algorithm for its potential in providing terrain segments which delimit drumlins. Supervised testing was based on five 5-m DEMs that represented a set of 173 synthetic drumlins at random but representative positions in the same landscape. Five LSPs were tested, and four variants were computed for each LSP to assess the impact of median filtering of DEMs, and of logarithmic transformation of LSPs. The testing scheme (1) employs MRS to partition each LSP exhaustively into 200 coarser scales of terrain segments by increasing the scale parameter (SP), (2) identifies the spatially best-matching terrain segment for each reference drumlin, and (3) computes four segmentation accuracy metrics for quantifying the overall spatial match between drumlin segments and reference drumlins. Results of 100 tests showed that MRS tends to perform best on LSPs that are regionally derived from filtered DEMs and then log-transformed. MRS delineated 97% of the detected drumlins at SP values between 1 and 50. Drumlin delimitation rates with values up to 50% are in line with the success of manual interpretations. Synthetic DEMs are well-suited for assessing landform quantification methods such as MRS, since subjectivity in the reference data is avoided, which increases the reliability, validity and applicability of results.
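One common way to score the spatial match between a candidate terrain segment and a reference landform is areal intersection over union on boolean footprint masks. The abstract does not name its four accuracy metrics, so this is an assumed, representative example rather than the study's exact measure.

```python
import numpy as np

def iou(segment, reference):
    """Intersection-over-union of two boolean footprint masks (0..1)."""
    inter = np.logical_and(segment, reference).sum()
    union = np.logical_or(segment, reference).sum()
    return inter / union if union else 0.0

ref = np.zeros((50, 50), bool); ref[20:30, 10:35] = True   # reference drumlin
seg = np.zeros((50, 50), bool); seg[22:32, 12:40] = True   # candidate segment
print(round(iou(seg, ref), 2))  # → 0.53
```

A per-drumlin score like this, computed for the best-matching segment at each scale parameter, is then aggregated into the kind of overall delimitation rates the study reports.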